<MPSCNNConvolutionDataSource>(3) MetalPerformanceShaders.framework <MPSCNNConvolutionDataSource>(3)

<MPSCNNConvolutionDataSource>

#import <MPSCNNConvolution.h>

Inherits <NSCopying>, and <NSObject>.


(MPSDataType) - dataType
(MPSCNNConvolutionDescriptor *__nonnull) - descriptor
(void *__nonnull) - weights
(float *__nullable) - biasTerms
(BOOL) - load
(void) - purge
(NSString *__nullable) - label
(vector_float2 *__nonnull) - rangesForUInt8Kernel
(float *__nonnull) - lookupTableForUInt8Kernel
(MPSCNNWeightsQuantizationType) - weightsQuantizationType
(MPSCNNConvolutionWeightsAndBiasesState *__nullable) - updateWithCommandBuffer:gradientState:sourceState:
(BOOL) - updateWithGradientState:sourceState:
(nonnull instancetype) - copyWithZone:device:

- (float * __nullable) biasTerms [required]

Returns a pointer to the bias terms for the convolution. Each entry in the array is a single precision IEEE-754 float and represents one bias. The number of entries is equal to outputFeatureChannels.

Frequently, this function is a single line of code to return a pointer to memory allocated in -load. It may also just return nil.

Note: bias terms are always float, even when the weights are not.
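
For example, a minimal sketch for a hypothetical data source class that allocates and fills a _biases array of outputFeatureChannels floats in -load (the ivar is illustrative, not part of MPS):

- (float * __nullable) biasTerms
{
    return _biases;    // illustrative ivar filled in -load; return nil if the convolution has no bias
}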

- (nonnull instancetype) copyWithZone: (nullable NSZone *) zone device: (nullable id<MTLDevice>) device [optional]

When copyWithZone:device: is called on the convolution, the data source's copyWithZone:device: will be called if the data source object responds to that selector. If not, copyWithZone: will be called if the data source responds to it. Otherwise, the data source is simply retained. This allows an application to make a separate copy of the data source when the convolution itself is copied, for example when copying a training graph to run on a second GPU, so that weight updates on the two GPUs do not end up stomping on the same data source.
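
A minimal sketch of such a copy, assuming a hypothetical MyConvDataSource class whose initializer (initWithWeightsPath:, illustrative, not part of the MPS API) sets up its own independent weight storage:

- (nonnull instancetype) copyWithZone: (nullable NSZone *) zone
                               device: (nullable id<MTLDevice>) device
{
    // Give the copy its own storage so that a second training graph
    // (e.g. on another GPU) does not update this object's weights.
    // The device parameter may be used to allocate GPU-side resources; it is unused here.
    return [[MyConvDataSource allocWithZone: zone] initWithWeightsPath: _weightsPath];
}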

- (MPSDataType) dataType [required]

Alerts MPS to what sort of weights are provided by the object. For normal convolutions using MPSCNNConvolution, MPSDataTypeUInt8, MPSDataTypeFloat16 and MPSDataTypeFloat32 are supported. MPSCNNBinaryConvolution always assumes weights of type MPSDataTypeUInt32.

- (MPSCNNConvolutionDescriptor * __nonnull) descriptor [required]

Return a MPSCNNConvolutionDescriptor as needed. MPS will not modify this object other than perhaps to retain it. The user should set the appropriate neuron when creating the convolution descriptor, and for batch normalization use:

-setBatchNormalizationParametersForInferenceWithMean:variance:gamma:beta:epsilon:

Returns:

A MPSCNNConvolutionDescriptor that describes the kernel housed by this object.
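
For example, a sketch that builds the descriptor from illustrative stored dimensions (_kernelWidth, _kernelHeight, _inputFeatureChannels, _outputFeatureChannels are assumptions of this example) and fuses a ReLU neuron:

- (MPSCNNConvolutionDescriptor * __nonnull) descriptor
{
    MPSCNNConvolutionDescriptor *desc =
        [MPSCNNConvolutionDescriptor cnnConvolutionDescriptorWithKernelWidth: _kernelWidth
                                                                kernelHeight: _kernelHeight
                                                        inputFeatureChannels: _inputFeatureChannels
                                                       outputFeatureChannels: _outputFeatureChannels];
    // Fuse an activation into the convolution (optional).
    [desc setNeuronType: MPSCNNNeuronTypeReLU parameterA: 0.0f parameterB: 0.0f];
    return desc;
}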

- (NSString * __nullable) label [required]

A label that is transferred to the convolution at init time. Overridden by MPSCNNConvolutionNode.label if it is non-nil.

- (BOOL) load [required]

Alerts the data source that the data will be needed soon. Each load alert will be balanced by a purge later, when MPS no longer needs the data from this object. Load will always be called at least once after initial construction, or after each purge of the object, before anything else is called. Note: load may be called merely to inspect the descriptor. In some circumstances it may be worthwhile to postpone weight and bias construction until they are actually needed, to avoid touching memory and to keep the working set small. The load function is intended to be an opportunity to open files or to mark memory as no longer purgeable.

Returns:

Returns YES on success. If NO is returned, expect MPS object construction to fail.
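
A minimal sketch of -load, assuming the weights and biases are stored as raw float32 values in a flat binary file whose path is kept in an illustrative _weightsPath ivar (the file layout and ivars are assumptions of this example):

- (BOOL) load
{
    size_t weightCount = _inputFeatureChannels * _outputFeatureChannels * _kernelHeight * _kernelWidth;

    // Illustrative file layout: all weights, then all bias terms, as raw float32.
    NSData *blob = [NSData dataWithContentsOfFile: _weightsPath];
    if (blob.length < (weightCount + _outputFeatureChannels) * sizeof(float))
        return NO;

    _weights = (float *) malloc(weightCount * sizeof(float));
    _biases  = (float *) malloc(_outputFeatureChannels * sizeof(float));
    if (_weights == NULL || _biases == NULL)
        return NO;

    memcpy(_weights, blob.bytes, weightCount * sizeof(float));
    memcpy(_biases, (const float *) blob.bytes + weightCount, _outputFeatureChannels * sizeof(float));
    return YES;
}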

- (float * __nonnull) lookupTableForUInt8Kernel [optional]

A pointer to a 256-entry lookup table containing the values to use for the weight range [0,255].

- (void) purge [required]

Alerts the data source that the data is no longer needed. Each load alert will be balanced by a purge later, when MPS no longer needs the data from this object.
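
The matching -purge for the -load sketch above simply releases the CPU-side copies:

- (void) purge
{
    free(_weights);  _weights = NULL;
    free(_biases);   _biases  = NULL;
}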

- (vector_float2 * __nonnull) rangesForUInt8Kernel [optional]

A list of per-output-channel limits that describe the 8-bit range. This returns a pointer to an array of vector_float2[outputChannelCount] values. The first value in each vector is the minimum of the range; the second value is the maximum.

The 8-bit weight value is interpreted as:

float unorm8_weight = uint8_weight / 255.0f;    // unorm8_weight has range [0,1.0]
float max = range[outputChannel].y;
float min = range[outputChannel].x;
float weight = unorm8_weight * (max - min) + min;
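
A sketch of the accessor, assuming the per-channel minimum and maximum values were computed when the weights were quantized to UInt8 and stored in an illustrative _ranges array of outputFeatureChannels entries:

- (vector_float2 * __nonnull) rangesForUInt8Kernel
{
    // _ranges[c].x is the minimum and _ranges[c].y the maximum float value
    // reconstructed for output channel c, as in the formula above.
    return _ranges;
}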

- (MPSCNNConvolutionWeightsAndBiasesState * __nullable) updateWithCommandBuffer: (nonnull id<MTLCommandBuffer>) commandBuffer gradientState: (MPSCNNConvolutionGradientState * __nonnull) gradientState sourceState: (MPSCNNConvolutionWeightsAndBiasesState * __nonnull) sourceState [optional]

Callback for the MPSNNGraph to update the convolution weights on the GPU. It is the responsibility of this method to decrement the read count of both the gradientState and the sourceState before returning. BUG: prior to macOS 10.14 and iOS/tvOS 12.0, the MPSNNGraph incorrectly decrements the read count of the gradientState after this method is called.

Parameters:

commandBuffer The command buffer on which to do the update. MPSCNNConvolutionGradientNode.trainingStyle controls where you want your update to happen. Provide an implementation of this function for GPU-side updates.
gradientState A state object produced by the MPSCNNConvolution and updated by MPSCNNConvolutionGradient, containing the weight gradients.
sourceState A state object containing the convolution weights.

Returns:

If nil, no update occurs. If non-nil, the result will be used to update the weights in the MPSNNGraph.
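
One possible GPU-side sketch uses MPSNNOptimizerStochasticGradientDescent; the _optimizer and the persistent _updatedState (an MPSCNNConvolutionWeightsAndBiasesState) are illustrative ivars created elsewhere, and whether the optimizer's encode call already decrements the state read counts should be verified against its documentation:

- (MPSCNNConvolutionWeightsAndBiasesState * __nullable)
        updateWithCommandBuffer: (nonnull id<MTLCommandBuffer>) commandBuffer
                  gradientState: (MPSCNNConvolutionGradientState * __nonnull) gradientState
                    sourceState: (MPSCNNConvolutionWeightsAndBiasesState * __nonnull) sourceState
{
    // Encode a plain gradient-descent step (no momentum):
    // _updatedState = sourceState - learningRate * gradient.
    [_optimizer encodeToCommandBuffer: commandBuffer
             convolutionGradientState: gradientState
               convolutionSourceState: sourceState
                 inputMomentumVectors: nil
                          resultState: _updatedState];

    // The MPSNNGraph reloads the convolution weights from the returned state.
    return _updatedState;
}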

- (BOOL) updateWithGradientState: (MPSCNNConvolutionGradientState * __nonnull) gradientState sourceState: (MPSCNNConvolutionWeightsAndBiasesState * __nonnull) sourceState [optional]

Callback for the MPSNNGraph to update the convolution weights on the CPU. MPSCNNConvolutionGradientNode.trainingStyle controls where you want your update to happen. Provide an implementation of this function for CPU-side updates.

Parameters:

gradientState A state object produced by the MPSCNNConvolution and updated by MPSCNNConvolutionGradient, containing the weight gradients. The MPSNNGraph is responsible for calling [gradientState synchronizeOnCommandBuffer:] so that the application gets correct gradients for the CPU-side update.
sourceState A state object containing the convolution weights actually used. MPSCNNConvolution and MPSCNNConvolutionGradient will reload their weights from the data source right after this method is called. Note that the weights in this state may not match the weights in your data source due to conversion loss; they are the weights actually used, and they are what you should use to calculate the new weights, since your own copy may be inexact. Write the new weights into your copy so that they are picked up by the reload.

Returns:

YES if success / no error, NO in case of failure.
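
A minimal CPU-side sketch applying plain gradient descent with an illustrative fixed _learningRate; it reads the gradients and the weights actually used from the state buffers and writes the result into the data source's own arrays, which the subsequent reload picks up through -weights and -biasTerms:

- (BOOL) updateWithGradientState: (MPSCNNConvolutionGradientState * __nonnull) gradientState
                     sourceState: (MPSCNNConvolutionWeightsAndBiasesState * __nonnull) sourceState
{
    size_t weightCount = _inputFeatureChannels * _outputFeatureChannels * _kernelHeight * _kernelWidth;

    const float *grad = (const float *) gradientState.gradientForWeights.contents;
    const float *used = (const float *) sourceState.weights.contents;
    for (size_t i = 0; i < weightCount; i++)
        _weights[i] = used[i] - _learningRate * grad[i];

    if (sourceState.biases != nil && gradientState.gradientForBiases != nil)
    {
        const float *biasGrad = (const float *) gradientState.gradientForBiases.contents;
        const float *biasUsed = (const float *) sourceState.biases.contents;
        for (size_t i = 0; i < _outputFeatureChannels; i++)
            _biases[i] = biasUsed[i] - _learningRate * biasGrad[i];
    }
    return YES;
}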

- (void * __nonnull) weights [required]

Returns a pointer to the weights for the convolution. The type of each entry in array is given by -dataType. The number of entries is equal to:

inputFeatureChannels * outputFeatureChannels * kernelHeight * kernelWidth


The layout of the filter weights is a 4D tensor (array): weight[ outputChannels ][ kernelHeight ][ kernelWidth ][ inputChannels / groups ]

Frequently, this function is a single line of code to return a pointer to memory allocated in -load.

Batch normalization parameters are set using -descriptor.

Note: For binary convolutions the layout of the weights is: weight[ outputChannels ][ kernelHeight ][ kernelWidth ][ floor( ((inputChannels/groups) + 31) / 32 ) ] with each group of 32 input feature channel indices packed into one 32-bit word in machine byte order, so that, for example, the 13th feature channel bit can be extracted using bitmask = (1U << 13).
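
For the non-binary layout above, the flat index of weight element (o, ky, kx, ic) can be computed as in this sketch, where icPerGroup = inputChannels / groups (the helper name is illustrative):

// Index into the flat weight array laid out as
// weight[ outputChannels ][ kernelHeight ][ kernelWidth ][ icPerGroup ].
static inline size_t MyWeightIndex(size_t o, size_t ky, size_t kx, size_t ic,
                                   size_t kernelHeight, size_t kernelWidth, size_t icPerGroup)
{
    return ((o * kernelHeight + ky) * kernelWidth + kx) * icPerGroup + ic;
}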

- (MPSCNNWeightsQuantizationType) weightsQuantizationType [optional]

Quantization type of weights. If it returns MPSCNNWeightsQuantizationTypeLookupTable, the lookupTableForUInt8Kernel method must be implemented. If it returns MPSCNNWeightsQuantizationTypeLinear, the rangesForUInt8Kernel method must be implemented.
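
A sketch for a data source that supplies linearly quantized UInt8 weights (pairing with the -rangesForUInt8Kernel sketch above); a float16 or float32 data source would instead return MPSCNNWeightsQuantizationTypeNone:

- (MPSCNNWeightsQuantizationType) weightsQuantizationType
{
    // Linear quantization: -rangesForUInt8Kernel must also be implemented.
    return MPSCNNWeightsQuantizationTypeLinear;
}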

Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.

Mon Jul 9 2018 Version MetalPerformanceShaders-119.3